31 research outputs found

    Electrocardiographic Screening of Arrhythmogenic Cardiomyopathy in Genotype-Positive and Phenotype-Negative Relatives

    Background: Arrhythmogenic cardiomyopathy is a hereditary cause of ventricular arrhythmias and sudden death. Identifying the healthy genetic carriers who will develop the disease remains a challenge. A novel approach to the analysis of the digital electrocardiograms of mutation carriers through signal processing may identify early electrocardiographic abnormalities. Methods: A retrospective case–control study included a population of healthy genetic carriers and their wild-type relatives. Genotype-positive/phenotype-negative individuals bore mutations associated with the development of arrhythmogenic cardiomyopathy. The relatives included had a non-pathological 12-lead electrocardiogram, echocardiogram, and cardiac magnetic resonance. Automatic digital electrocardiographic analyses comprised QRS duration and terminal activation delay, the number of QRS fragmentations, ST slope, and T-wave voltage. Results: Digital 12-lead electrocardiograms from 41 genotype-positive/phenotype-negative (29 simple carriers and 12 double mutation carriers) and 73 wild-type relatives were analyzed. No differences in QRS duration, the number of QRS fragmentations, or T-wave voltage were observed. After adjusting for potential confounders, double carriers showed a flatter average ST slope than simple carriers and wild-type relatives [5.18° (0.73–8.01), 7.15° (5.14–11.05), and 11.46° (3.94–17.49), respectively, p = 0.005]. There was a significant negative correlation between the ST slope and age in genotype-positive/phenotype-negative relatives (r = −0.376, p = 0.021) that was not observed in their wild-type counterparts (r = 0.074, p = 0.570). Conclusions: A flattened ST segment may be an early sign of electrical remodeling that precedes T-wave inversion in healthy genetic carriers. A thorough analysis of the digital electrocardiographic signal may help identify and measure early electrical abnormalities.
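
    As a rough illustration of the kind of automatic measurement described above, the sketch below estimates an ST-segment slope (in degrees) by a linear fit over a short window after the J point. The sampling rate, window length, J-point index, and the millivolt-to-angle scaling are illustrative assumptions, not the published pipeline.

        import numpy as np

        def st_slope_degrees(ecg_mv, fs, j_point_idx, window_ms=80, scale=1.0):
            """Estimate the ST-segment slope (degrees) with a least-squares linear
            fit over a short window after the J point.  The window length and the
            mV-to-angle scaling are illustrative assumptions."""
            n = int(window_ms * fs / 1000)
            seg = ecg_mv[j_point_idx:j_point_idx + n]
            t = np.arange(len(seg)) / fs               # time axis in seconds
            slope_mv_per_s = np.polyfit(t, seg, 1)[0]  # first coefficient = slope
            return np.degrees(np.arctan(slope_mv_per_s * scale))

        # Toy example: a gently rising ST segment sampled at 500 Hz
        fs = 500
        t = np.arange(0, 0.2, 1 / fs)
        print(round(st_slope_degrees(0.05 * t, fs, j_point_idx=0), 2))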

    Computer versus cardiologist: Is a machine learning algorithm able to outperform an expert in diagnosing a phospholamban p.Arg14del mutation on the electrocardiogram?

    Background: Phospholamban (PLN) p.Arg14del mutation carriers are known to develop dilated and/or arrhythmogenic cardiomyopathy, and typical electrocardiographic (ECG) features have been identified for diagnosis. Machine learning is a powerful tool used in ECG analysis and has been shown to outperform cardiologists. Objectives: We aimed to develop machine learning and deep learning models to diagnose PLN p.Arg14del cardiomyopathy using ECGs and to evaluate their accuracy compared with an expert cardiologist. Methods: We included 155 adult PLN mutation carriers and 155 age- and sex-matched control subjects. Twenty-one PLN mutation carriers (13.4%) were classified as symptomatic (symptoms of heart failure or malignant ventricular arrhythmias). The data set was split into training and testing sets using 4-fold cross-validation. Multiple models were developed to discriminate between PLN mutation carriers and control subjects. For comparison, expert cardiologists classified the same data set. The best performing models were validated using an external PLN p.Arg14del mutation carrier data set from Murcia, Spain (n = 50). We applied occlusion maps to visualize the ECG regions contributing most to the classification. Results: In terms of specificity, expert cardiologists (0.99) outperformed all models (range 0.53–0.81). In terms of accuracy and sensitivity, experts (0.28 and 0.64) were outperformed by all models (sensitivity range 0.65–0.81). T-wave morphology was most important for classification of PLN p.Arg14del carriers. External validation showed comparable results, with the best model outperforming the experts. Conclusion: This study shows that machine learning can outperform experienced cardiologists in the diagnosis of PLN p.Arg14del cardiomyopathy and suggests that the shape of the T wave is of added importance to this diagnosis.
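
    A minimal sketch of the evaluation protocol described above (4-fold cross-validation with per-fold accuracy and sensitivity) is given below using scikit-learn. The placeholder data and the plain logistic-regression classifier are assumptions for illustration only; the study itself used dedicated machine learning and deep learning models on ECGs.

        import numpy as np
        from sklearn.model_selection import StratifiedKFold
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, recall_score

        # Placeholder data: rows stand in for ECG-derived feature vectors,
        # labels mark PLN p.Arg14del carriers (1) versus controls (0).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(310, 20))
        y = np.repeat([0, 1], 155)

        cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
        for fold, (tr, te) in enumerate(cv.split(X, y), start=1):
            model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
            pred = model.predict(X[te])
            print(f"fold {fold}: accuracy={accuracy_score(y[te], pred):.2f}, "
                  f"sensitivity={recall_score(y[te], pred):.2f}")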

    On the Differential Analysis of Enterprise Valuation Methods as a Guideline for Unlisted Companies Assessment (I): Empowering Discounted Cash Flow Valuation

    The Discounted Cash Flow (DCF) method is probably the most widely used approach in company valuation, its main drawbacks being its well-known extreme sensitivity to key variables such as the Weighted Average Cost of Capital (WACC) and Free Cash Flow (FCF) estimations, which cannot be obtained unquestionably. In this paper we propose an unbiased and systematic DCF method that allows us to value private equity by leveraging stock market evidence, based on a twofold approach: first, the use of the inverse method assesses the existence of a coherent WACC that compares positively with market observations; second, different FCF forecasting methods are benchmarked and shown to correspond with actual valuations. We use historical financial data from 42 companies in five sectors, extracted from Eikon-Reuters. Our results show that WACC and FCF forecasts are not coherent with market expectations over time, across sectors, or across market regions when only historical and endogenous variables are taken into account. The best estimates are found when exogenous variables, operational normalization of the input space, and data-driven linear techniques are considered (Root Mean Square Error of 6.51). Our method suggests that FCFs and their positive alignment with Market Capitalization and the subordinate enterprise value are the most influential variables. The fine-tuning of the methods presented here, along with an exhaustive analysis using nonlinear machine-learning techniques, is developed and discussed in the companion paper.
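
    For reference, the core DCF computation that the paper builds on can be written in a few lines: the enterprise value is the sum of the discounted forecast FCFs plus a discounted Gordon-growth terminal value. The numbers below are purely illustrative, and the paper's forecasting and inverse-WACC machinery is not reproduced here.

        def discounted_cash_flow(fcf_forecast, wacc, growth):
            """Enterprise value as the present value of forecast free cash flows
            plus a Gordon-growth terminal value (illustrative inputs only)."""
            n = len(fcf_forecast)
            pv_explicit = sum(fcf / (1 + wacc) ** t
                              for t, fcf in enumerate(fcf_forecast, start=1))
            terminal = fcf_forecast[-1] * (1 + growth) / (wacc - growth)
            return pv_explicit + terminal / (1 + wacc) ** n

        # Five years of FCF (in millions), 8% WACC, 2% perpetual growth
        print(round(discounted_cash_flow([100, 105, 110, 116, 122], 0.08, 0.02), 1))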

    Opening the 21st Century Technologies to Industries: On the Special Issue Machine Learning for Society

    Machine learning techniques, more commonly known today as artificial intelligence, are playing an increasingly important role in all aspects of our lives. Their applications extend to all areas of society where such techniques can be adapted to provide efficient and interesting solutions to a wide range of problems. In this Special Issue, entitled Machine Learning for Society [1], we present some examples of the applications of this type of technique. From the valuation of unlisted companies to the characterization of clients, through the detection of financial crises or the prediction of exchange-rate behavior, the works presented here have in common the search for efficient solutions based on a set of historical data and the application of artificial intelligence techniques. The techniques and datasets used, as well as the relevant findings developed in the different articles of this Special Issue, are summarized below.

    On the Differential Analysis of Enterprise Valuation Methods as a Guideline for Unlisted Companies Assessment (II): Applying Machine-Learning Techniques for Unbiased Enterprise Value Assessment

    The search for an unbiased company valuation method to reduce uncertainty, whether or not it is automatic, has been a relevant topic in social sciences and business development for decades. Many methods have been described in the literature, but consensus has not been reached. In the companion paper we aimed to review the assessment capabilities of the traditional company valuation model based on a company's intrinsic value using the Discounted Cash Flow (DCF). In this paper, we capitalized on the potential of exogenous information combined with Machine Learning (ML) techniques. To do so, we performed an extensive analysis to evaluate the predictive capabilities of up to 18 different ML techniques. Endogenous variables (features) related to value creation (DCF) proved to be crucial elements for the models, while the incorporation of exogenous, industry- and country-specific variables incrementally improved the ML performance. Bagging Trees, Support Vector Machine Regression, and Gaussian Process Regression methods consistently provided the best results. We concluded that an unbiased model can be created from endogenous and exogenous information to build a reference framework with which to price and benchmark Enterprise Value for valuation and credit risk assessment.
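
    A condensed sketch of this kind of benchmark, restricted to the three model families reported as best (Bagging Trees, Support Vector Machine Regression, and Gaussian Process Regression), is shown below with scikit-learn. The synthetic design matrix, the cross-validation setup, and the RMSE scoring are stand-ins for the study's actual 18-technique comparison.

        import numpy as np
        from sklearn.ensemble import BaggingRegressor
        from sklearn.svm import SVR
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.model_selection import cross_val_score

        # Placeholder design matrix: endogenous (DCF-related) plus exogenous
        # industry/country features; the target stands in for a market-based value.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 12))
        y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)

        models = {
            "Bagging Trees": BaggingRegressor(random_state=0),
            "SVM Regression": SVR(),
            "Gaussian Process": GaussianProcessRegressor(random_state=0),
        }
        for name, model in models.items():
            rmse = -cross_val_score(model, X, y, cv=5,
                                    scoring="neg_root_mean_squared_error").mean()
            print(f"{name}: RMSE = {rmse:.3f}")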

    Sentiment Analysis of Political Tweets From the 2019 Spanish Elections

    The use of sentiment analysis methods has increased in recent years across a wide range of disciplines. Despite the potential impact of the development of opinions during political elections, few studies have focused on the analysis of sentiment dynamics and their characterization from statistical and mathematical perspectives. In this paper, we apply a set of basic methods to analyze the statistical and temporal dynamics of sentiment analysis in political campaigns and assess their scope and limitations. To this end, we gathered thousands of Twitter messages mentioning political parties and their leaders, posted several weeks before and after the 2019 Spanish general election. We then followed a twofold analysis strategy: (1) statistical characterization using indices derived from well-known temporal and information metrics and methods, including entropy, mutual information, and the Compounded Aggregated Positivity Index, allowing the estimation of changes in the density function of sentiment data; and (2) feature extraction from nonlinear intrinsic patterns in terms of manifold learning using autoencoders and stochastic embeddings. The results show that both the indices and the manifold features provide an informative characterization of the sentiment dynamics throughout the election period. We found measurable variations in sentiment behavior and polarity across the political parties and their leaders, and we observed different dynamics depending on the parties' positions on the political spectrum, their presence at the regional or national level, and their nationalist or globalist aspirations.
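
    The information-theoretic part of the characterization can be sketched as follows: given per-tweet sentiment scores for two parties, compute the entropy of each score histogram and the mutual information between the discretized series. The synthetic scores, the binning, and the pairing of parties are assumptions for illustration; the Compounded Aggregated Positivity Index itself is not reproduced here.

        import numpy as np
        from scipy.stats import entropy
        from sklearn.metrics import mutual_info_score

        # Placeholder sentiment scores in [-1, 1] for two parties; real inputs
        # would be lexicon scores of the collected campaign tweets.
        rng = np.random.default_rng(2)
        party_a = np.tanh(rng.normal(0.2, 0.5, size=60))
        party_b = np.tanh(rng.normal(-0.1, 0.5, size=60))

        bins = np.linspace(-1, 1, 21)
        hist_a, _ = np.histogram(party_a, bins=bins)

        # Shannon entropy of the score histogram (in bits)
        print("entropy(A):", round(entropy(hist_a, base=2), 3))

        # Mutual information between the two discretized score series
        mi = mutual_info_score(np.digitize(party_a, bins), np.digitize(party_b, bins))
        print("MI(A;B):", round(mi, 3))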

    Manifold analysis of the P-wave changes induced by pulmonary vein isolation during cryoballoon procedure

    Background/Aim: In atrial fibrillation (AF) ablation procedures, it is desirable to know whether a proper disconnection of the pulmonary veins (PVs) was achieved. We hypothesize that information about their isolation could be provided by analyzing changes in the P-wave after ablation. Thus, we present a method to detect PV disconnection using P-wave signal analysis. Methods: Conventional P-wave feature extraction was compared to an automatic feature extraction procedure based on creating low-dimensional latent spaces for cardiac signals with the Uniform Manifold Approximation and Projection (UMAP) method. A database of patients (19 controls and 16 AF individuals who underwent a PV ablation procedure) was collected. Standard 12-lead ECG was recorded, and P-waves were segmented and averaged to extract conventional features (duration, amplitude, and area) and their manifold representations provided by UMAP in a 3-dimensional latent space. A virtual patient was used to validate these results further and to study the spatial distribution of the extracted characteristics over the whole torso surface. Results: Both methods showed differences between P-waves before and after ablation. Conventional methods were more prone to noise, P-wave delineation errors, and inter-patient variability. P-wave differences were observed in the standard lead recordings; however, larger differences appeared in the torso region over the precordial leads. Recordings near the left scapula also yielded noticeable differences. Conclusions: P-wave analysis based on UMAP parameters detects PV disconnection after ablation in AF patients and is more robust than heuristic parameterization. Moreover, additional leads beyond the standard 12-lead ECG should be used to better detect PV isolation and possible future reconnections.
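
    The latent-space construction can be sketched with the umap-learn package: each averaged P-wave is one row of a matrix, and UMAP projects the rows into a 3-dimensional embedding. The synthetic P-wave matrix and the crude pre/post offset below are placeholders; the segmentation, averaging, and virtual-patient torso maps of the study are not reproduced.

        import numpy as np
        import umap  # provided by the umap-learn package

        # Placeholder matrix of averaged P-waves: one row per recording, columns
        # are signal samples (real inputs: segmented, averaged 12-lead P-waves).
        rng = np.random.default_rng(3)
        p_pre = rng.normal(0.0, 1.0, size=(35, 120))
        p_post = p_pre + 0.5          # crude stand-in for a post-ablation change
        X = np.vstack([p_pre, p_post])

        # Project every averaged P-wave into a 3-dimensional latent space
        embedding = umap.UMAP(n_components=3, random_state=0).fit_transform(X)
        print(embedding.shape)        # (70, 3): one latent point per P-wave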

    Cybersecurity Alert Prioritization in a Critical High Power Grid With Latent Spaces

    High-power electric grid networks require extreme security in their associated telecommunication networks to ensure protection and control throughout power transmission. Accordingly, supervisory control and data acquisition systems form a vital part of any critical infrastructure, and the safety of the associated telecommunication network from intrusion is crucial. Whereas events related to operation and maintenance are often available and carefully documented, few tools have been proposed to discriminate the information dealing with the heterogeneous data from intrusion detection systems and to support network engineers. In this work, we present the use of deep learning techniques, such as Autoencoders, together with conventional Multiple Correspondence Analysis, to analyze and prune the events on power communication networks in terms of the categorical data types often used in anomaly and intrusion detection (such as addresses or anomaly descriptions). This analysis allows us to quantify and statistically describe high-severity events. Overall, around 5–10% of alerts were prioritized in the analysis as the first to be handled by managers. Moreover, probability clouds of alerts were shown to form explicit manifolds in latent spaces. These results offer a homogeneous framework for implementing anomaly detection prioritization in power communication networks.
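
    The prioritization idea can be illustrated with a simple linear stand-in for the MCA/autoencoder latent spaces described above: one-hot encode the categorical alert fields, project them onto a low-dimensional space, and rank alerts by reconstruction error so that rare category combinations surface first. The alert log, the fields, and the component count below are assumptions.

        import numpy as np
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.decomposition import PCA

        # Placeholder categorical alert log: (source address, anomaly description).
        common = [["10.0.0.1", "port-scan"], ["10.0.0.2", "auth-failure"]] * 50
        rare = [["10.0.0.9", "firmware-tamper"]]          # unusual combination
        alerts = np.array(common + rare)

        X = OneHotEncoder().fit_transform(alerts).toarray()

        # One-dimensional latent space; alerts that reconstruct poorly are
        # unusual category combinations and are ranked first for the operators.
        pca = PCA(n_components=1).fit(X)
        error = np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)
        print("highest-priority alert index:", int(np.argmax(error)))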

    Anomaly Detection from Low-dimensional Latent Manifolds with Home Environmental Sensors

    Human Activity Recognition poses a significant challenge within Active and Assisted Living (AAL) systems, which rely extensively on ubiquitous environmental sensor-based acquisition devices to detect users' situations in their daily living. Environmental measurement systems deployed indoors yield multiparametric data in heterogeneous formats, which presents a challenge for developing Machine Learning-based AAL models. We hypothesized that anomaly detection algorithms could be effectively employed to create data-driven models for monitoring home environments, and that the complex multiparametric indoor measurements can often be represented by a relatively small number of latent variables generated through Manifold Learning (MnL) techniques. We examined both linear (Principal Component Analysis) and non-linear (AutoEncoders) techniques for generating these latent spaces, and the utility of core domain detection techniques for identifying anomalies within the resulting low-dimensional manifolds. We benchmarked this approach using three publicly available datasets (hh105, Aruba, and Tulum) and one proprietary dataset (Elioth) for home environmental monitoring. Our results demonstrated the following key findings: (a) nonlinear manifold estimation techniques offer significant advantages in retrieving latent variables when compared to linear techniques; (b) the quality of the reconstruction of the original multidimensional recordings serves as an acceptable indicator of the quality of the generated latent spaces; (c) domain detection identifies regions of normality consistent with typical individual activities in these spaces; and (d) the system effectively detects deviations from typical activity patterns and labels anomalies. This study lays the groundwork for further exploration of enhanced methods for extracting information from MnL data models and their application within AAL and possibly other sectors.
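
    A compact sketch of the reconstruction-error idea is given below: a small bottleneck network trained to reproduce its own input acts as a rudimentary autoencoder, and time windows that reconstruct poorly are flagged as deviations from typical activity. The synthetic sensor matrix, the network size, and the flagging threshold are assumptions; the benchmark datasets and the core domain detection step of the study are not reproduced.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # Placeholder home-sensor matrix: rows are time windows, columns are
        # channels (temperature, humidity, CO2, motion counts, ...).
        rng = np.random.default_rng(4)
        normal = rng.normal(size=(500, 8))
        anomalous = rng.normal(loc=4.0, size=(5, 8))      # atypical readings
        X = StandardScaler().fit_transform(np.vstack([normal, anomalous]))

        # Bottleneck network trained to reproduce its input (a rudimentary
        # autoencoder); high reconstruction error marks atypical windows.
        ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
        ae.fit(X[:500], X[:500])                          # train on normal windows only
        error = np.mean((X - ae.predict(X)) ** 2, axis=1)
        print("flagged rows:", np.where(error > error[:500].max())[0])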

    On the Statistical and Temporal Dynamics of Sentiment Analysis

    Despite the broad interest in and use of sentiment analysis nowadays, most of the conclusions in the current literature are driven by simple statistical representations of sentiment scores. On that basis, sentiment evaluation currently consists of encoding and aggregating emotional information from a number of individuals and their populational trends. We hypothesized that the stochastic processes that sentiment analysis systems aim to measure exhibit nontrivial statistical and temporal properties. We established an experimental setup consisting of analyzing the short text messages (tweets) of six user groups of different natures (universities, politics, musicians, communication media, technological companies, and financial companies), including in each group ten high-intensity users in terms of their regular generation of traffic on social networks. Statistical descriptors were checked to converge at about 2000 messages for each user, for which messages from the last two weeks were compiled using a custom-made tool. The messages were subsequently processed for sentiment scoring in terms of different lexicons currently available and widely used. Not only were the temporal dynamics of the resulting score time series per user scrutinized, but also their statistical description as given by the score histogram, the temporal autocorrelation, the entropy, and the mutual information. Our results showed that the actual dynamic range of lexicons is in general moderate, and hence not much resolution is given at their end-of-scales. We found that seasonal patterns were more present in the time evolution of the number of tweets, but to a much lesser extent in the sentiment intensity. Additionally, we found that the presence of retweets added negligible effects over standard statistical modes, while it hindered informational and temporal patterns. The innovative Compounded Aggregated Positivity Index developed in this work proved to be characteristic for industries and at ..
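
    The per-user descriptors mentioned above (score histogram, entropy, temporal autocorrelation) can be sketched in a few lines; the synthetic score series below, its length, and the binning are assumptions standing in for the roughly 2000 lexicon-scored messages compiled per user.

        import numpy as np
        from scipy.stats import entropy

        def autocorrelation(x, max_lag):
            """Sample autocorrelation of a sentiment score time series."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            var = np.dot(x, x)
            return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

        # Placeholder per-tweet sentiment scores for one high-intensity account
        rng = np.random.default_rng(5)
        scores = np.clip(0.3 * np.sin(np.arange(2000) / 50)
                         + rng.normal(0, 0.2, 2000), -1, 1)

        hist, _ = np.histogram(scores, bins=np.linspace(-1, 1, 21))
        print("histogram entropy (bits):", round(entropy(hist, base=2), 3))
        print("autocorrelation, lags 1-3:", np.round(autocorrelation(scores, 3), 3))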